‘Ogor Merp’ and the Architecture of Intent | Why We Must Master the Imagination Before the Algorithm
My nephew has had his entire life planned out since he was four years old. He’s going to be a triple threat: a video game designer, a movie director, and a screenwriter.
But a few years ago, when he was nine, he hit his first speed bump on the road to success. He failed a math test.
On the back of the page, he had drawn a detailed, six-panel comic strip titled “Real Case’s Comix!” It was a masterpiece of nine-year-old satire.
It showed his dad telling him, "You will use math for yore life!" The next panel showed my nephew’s perspective—what he actually heard: "Ogor merp 100 ep loop orp icmat!" The story peaked with a "Pop Quiz" from his teacher, and ended with my shocked nephew muttering a simple, "dang." At the very bottom, he scribbled: "The end?"
It was clever, observant storytelling. But when I explained that he actually needed those "merp loop" equations to build the physics of a game, he issued a mic-drop statement that rendered me speechless: “Actually, I don’t need to learn math. AI will just do it all for me.”
The Rise of Cognitive Apathy
It’s a family joke we still laugh about today — mostly because Case is twelve now and he still hates math. But reflecting on it recently, I realized his reaction wasn't just a funny "kid moment." It was an early look at a much larger shift in how we process the world.
In his mind, he was being efficient. But he had fallen into what I now call the "Ogor Merp Gap" — the belief that because we can generate an automated answer, we no longer need the infrastructure of logic required to reach it.
Research into "The Google Effect," or digital amnesia, suggests that when we know information can be found externally, our brains stop prioritizing the retention of that knowledge. And when we use these tools to circumvent the thought process rather than to streamline the work, we risk training our brains to skip the logic-building phase entirely.
We are already seeing the consequences of this mindset. A 2024 study from The Wharton School of the University of Pennsylvania observed students using Generative AI for math. The AI lifted their practice-problem scores by 48%. But that success was a facade: when the tool was taken away, those same students performed 17% worse on conceptual exams. They had the answers, but they had lost the "why."
Protecting the Creative Core
The irony is that my nephew already has the most valuable skill for the AI age — human intent. He can observe a situation, find the irony, and communicate it through art. But if we outsource the "struggle" of the math test to an algorithm, we aren't raising architects. We’re raising spectators.
This extends beyond the classroom. Recent research from University College London found that while AI makes production easier, it leads to a "collective loss of novelty." When everyone relies on the same algorithms, our individual perspectives are diluted; the sharp edges of original thought dissolve, and each new idea becomes a carbon copy of the last. Case's "Ogor Merp" comic is valuable because it is messy. An AI would never make the bizarre, brilliant mistakes that make his work human.
The Efficiency Trap
We’re seeing this same "Ogor Merp Gap" show up in the workplace. Boardrooms are rushing to set ambiguous targets to "utilize AI," usually in pursuit of automation-driven savings.
There is real risk here for fields built on human connection, like marketing and storytelling. When we prioritize speed over expertise, we strip the humanity out of the work. If a brand voice is outsourced to a prompt just to hit a metric, it loses the idiosyncratic edges that make people actually stop and listen. By treating AI as a replacement for judgment rather than a tool for leverage, companies are taking out a high-interest loan on creativity. They might save hours on monotonous work today, but they are bankrupting original thought in the process.
The Solution: An Architecture of Intent
Despite the warnings, I’m incredibly optimistic about our future with AI. It has the power to advance our efficiency, letting us automate the monotonous so we can focus on innovating. However, we need to remember — AI is a catalyst, not the reaction itself. It still needs a human spark to ignite.
To bridge the gap, we need a model of staged autonomy that scales from the classroom to the boardroom:
Prioritize the "Analog Playground": We have to protect the "merp loops" — the messy, unpolished phase of learning and brainstorming. Whether it’s a child with a pencil or a strategist with a whiteboard, intellectual muscle is built in the struggle of analog thinking before a prompt is ever written.
AI as a "Stress Test," Not a First Draft: Shift the workflow so that AI is used to critique, not create. Produce the unaided work first, then use the algorithm to hunt for logical fallacies, missing data, or hollow arguments in your own original thought.
The Intent Requirement: We must value work based on the "why." If a student or an employee cannot explain the personal observation, the data nuance, or the strategic intent that informed a choice, the work loses its market value. In an AI-saturated world, the "soul" of the project is the only thing that isn't a commodity.
The End?
My nephew’s comic ended with a question mark: "The end?" Whether that question mark leads to a sequel where we master the logic required to be world-class creators, or one where we become entirely dependent on a prompt, is up to us.
AI won’t make us creators — it will merely make us faster. Whether we are teaching the next generation in a classroom or leading a team through a digital transformation, we have to be the masters of the machine. That mastery starts with the courage to put down the prompt and pick up the original thought.